associative learning
Mimicking associative learning of rats via a neuromorphic robot in open field maze using spatial cell models
Liu, Tianze, Siddique, Md Abu Bakr, An, Hongyu
Data-driven Artificial Intelligence (AI) approaches have exhibited remarkable prowess across various cognitive tasks using extensive training data. However, the reliance on large datasets and neural networks presents challenges such as high power consumption and limited adaptability, particularly in SWaP-constrained (size, weight, and power) applications like planetary exploration. To address these issues, we propose enhancing the autonomous capabilities of intelligent robots by emulating the associative learning observed in animals. Associative learning enables animals to adapt to their environment by memorizing concurrent events. By replicating this mechanism, neuromorphic robots can navigate dynamic environments autonomously, learning from interactions to optimize performance. This paper explores the emulation of associative learning in rodents using neuromorphic robots within open-field maze environments, leveraging insights from spatial cells such as place and grid cells. By integrating these models, we aim to enable online associative learning for spatial tasks in real-time scenarios, bridging the gap between biological spatial cognition and robotics for advancements in autonomous systems.
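The abstract does not spell out the spatial cell models it builds on, but the standard idealized firing-rate models are well known. Here is a minimal, self-contained sketch (not from the paper; all function names and parameters are illustrative) of a grid cell as a sum of three plane waves 60 degrees apart, and a place cell as a Gaussian bump:

```python
import numpy as np

def grid_cell_rate(pos, spacing=0.5, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate: three cosine gratings 60 deg apart,
    rectified and normalized to [0, 1]. Peaks tile a hexagonal lattice."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number for the grid spacing
    angles = np.deg2rad([0, 60, 120])
    total = sum(np.cos(k * ((pos[0] - phase[0]) * np.cos(a)
                            + (pos[1] - phase[1]) * np.sin(a)))
                for a in angles)
    return max(0.0, total / 3.0)

def place_cell_rate(pos, center=(0.5, 0.5), sigma=0.1):
    """Idealized place field: a single Gaussian bump centered at `center`."""
    d2 = (pos[0] - center[0]) ** 2 + (pos[1] - center[1]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))
```

Evaluating these over a grid of positions in an open-field arena reproduces the familiar hexagonal grid map and single-peak place map, respectively.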
Why we should thank pigeons for our AI breakthroughs
People looking for precursors to artificial intelligence often point to science fiction by authors like Isaac Asimov or thought experiments like the Turing test. But an equally important, if surprising and less appreciated, forerunner is B. F. Skinner's research with pigeons in the middle of the 20th century. Skinner believed that association--learning, through trial and error, to link an action with a punishment or reward--was the building block of every behavior, not just in pigeons but in all living organisms, including human beings. His "behaviorist" theories fell out of favor with psychologists and animal researchers in the 1960s but were taken up by computer scientists who eventually provided the foundation for many of the artificial-intelligence tools from leading firms like Google and OpenAI. These companies' programs increasingly incorporate a kind of machine learning whose core concept--reinforcement--is taken directly from Skinner's school of psychology and whose main architects, the computer scientists Richard Sutton and Andrew Barto, won the 2024 Turing Award, an honor widely considered the Nobel Prize of computer science.
Associative Learning via Inhibitory Search
ALVIS is a reinforcement-based connectionist architecture that learns associative maps in continuous multidimensional environments. The discovered locations of positive and negative reinforcements are recorded in "do be" and "don't be" subnetworks, respectively. The outputs of the subnetworks relevant to the current goal are combined and compared with the current location to produce an error vector. This vector is backpropagated through a motor-perceptual mapping network. ALVIS is demonstrated with a simulated robot posed a target-seeking task.
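The combine-and-compare step in the abstract can be sketched numerically. This is a speculative reconstruction, not ALVIS's actual rule: the arrays, the nearest-attractor choice, and the inverse-square repulsion weighting are all illustrative assumptions.

```python
import numpy as np

# Hypothetical "do be" (rewarded) and "don't be" (punished) location memories.
do_be = np.array([[0.2, 0.8], [0.9, 0.1]])
dont_be = np.array([[0.5, 0.5]])
current = np.array([0.4, 0.4])   # the robot's current location

# Pull toward the nearest rewarded location...
nearest = do_be[np.argmin(np.linalg.norm(do_be - current, axis=1))]
# ...and push away from punished ones (inverse-square falloff, illustrative).
repulsion = sum((current - d) / (np.linalg.norm(current - d) ** 2 + 1e-6)
                for d in dont_be)
# The resulting error vector is what would be backpropagated through the
# motor-perceptual mapping network.
error = (nearest - current) + 0.01 * repulsion
```

The key idea survives the simplification: the error is a spatial vector from where the robot is to where, given its goal, it ought to be.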
Associative Learning for Network Embedding
Liang, Yuchen, Krotov, Dmitry, Zaki, Mohammed J.
The network embedding task is to represent the node in the network as a low-dimensional vector while incorporating the topological and structural information. Most existing approaches solve this problem by factorizing a proximity matrix, either directly or implicitly. In this work, we introduce a network embedding method from a new perspective, which leverages Modern Hopfield Networks (MHN) for associative learning. Our network learns associations between the content of each node and that node's neighbors. These associations serve as memories in the MHN. The recurrent dynamics of the network make it possible to recover the masked node, given that node's neighbors. Our proposed method is evaluated on different downstream tasks such as node classification and linkage prediction. The results show competitive performance compared to the common matrix factorization techniques and deep learning based methods.
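The recurrent recall dynamics that recover a masked node can be illustrated with a generic Modern Hopfield retrieval step (a toy sketch of MHN recall in general, not the paper's embedding model; patterns and the inverse temperature `beta` are illustrative):

```python
import numpy as np

def mhn_retrieve(X, query, beta=8.0, steps=3):
    """Modern Hopfield recall: repeatedly re-weight stored patterns by their
    softmax similarity to the current state. X: (num_patterns, dim) memories."""
    xi = query.astype(float)
    for _ in range(steps):
        attn = np.exp(beta * (X @ xi))
        attn /= attn.sum()          # softmax attention over stored memories
        xi = X.T @ attn             # weighted recombination of memories
    return xi

patterns = np.array([[1., 1., -1., -1.],
                     [-1., 1., 1., -1.],
                     [1., -1., 1., -1.]])
masked = np.array([1., 1., -1., 0.])   # last component unknown
recovered = mhn_retrieve(patterns, masked)
```

With a sufficiently sharp `beta`, the dynamics snap the partial query onto the single closest stored pattern, which is exactly the masked-node recovery behavior the abstract describes.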
Ortus: an Emotion-Driven Approach to (artificial) Biological Intelligence
McDonald, Andrew W. E., Grimes, Sean, Breen, David E.
Ortus is a simple virtual organism that also serves as an initial framework for investigating and developing biologically-based artificial intelligence. Born from a goal to create complex virtual intelligence and an initial attempt to model C. elegans, Ortus implements a number of mechanisms observed in organic nervous systems, and attempts to fill in unknowns based upon plausible biological implementations and psychological observations. Implemented mechanisms include excitatory and inhibitory chemical synapses, bidirectional gap junctions, and Hebbian learning with its Stentian extension. We present an initial experiment that showcases Ortus' fundamental principles; specifically, a cyclic respiratory circuit, and emotionally-driven associative learning with respect to an input stimulus. Finally, we discuss the implications and future directions for Ortus and similar systems.
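For readers unfamiliar with the learning rules mentioned, here is a toy sketch of Hebbian updating and one common reading of its Stentian extension (synapses weaken when the postsynaptic cell fires without presynaptic support). Ortus's actual implementation may differ; rates and activity encodings are illustrative.

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.1):
    """Plain Hebb: strengthen synapses whose pre- and postsynaptic neurons
    are active together. pre, post are binary activity vectors."""
    return w + eta * np.outer(post, pre)

def stentian_update(w, pre, post, eta=0.1):
    """Stentian extension (one common reading): with pre in {0, 1}, mapping
    it to {-1, +1} makes an active postsynaptic cell weaken synapses from
    silent presynaptic cells while still strengthening supported ones."""
    return w + eta * np.outer(post, 2 * pre - 1)

w = np.zeros((2, 3))          # 2 postsynaptic x 3 presynaptic neurons
pre = np.array([1, 0, 1])
post = np.array([1, 0])
w = stentian_update(w, pre, post)
```

After one step, the firing postsynaptic neuron has strengthened its two supported synapses and weakened the unsupported one, while the silent neuron's weights are untouched.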
Darwinian Machine Learning: Principles of Machine Learning in Evo-devo, Evo-eco and Evolutionary Transitions in Individuality
Current evolutionary theory describes a Darwinian machine – i.e., heritable variation in reproductive success that assumes fixed mechanisms of variation and selection operating on a fixed reproductive unit. But, in fact, none of these mechanisms is fixed in nature. For example, the distribution of phenotypic variation changes over evolutionary time as a result of the evolution of development, the selective pressures on traits change as a result of the evolution of ecological interactions, and even the identity of the evolutionary unit changes as a result of the evolution of new reproductive strategies and new mechanisms of inheritance. The circular causality implied by an evolutionary process that alters its own mechanisms results in conceptual difficulties and controversies in many areas of evolutionary biology. However, in computer science, the idea that an algorithmic process can improve over time as a function of past experience, including its own past behaviour, has been thoroughly studied in the field of machine learning.
Sigma-Pi Learning: On Radial Basis Functions and Cortical Associative Learning
Mel, Bartlett W., Koch, Christof
The goal in this work has been to identify the neuronal elements of the cortical column that are most likely to support the learning of nonlinear associative maps. We show that a particular style of network learning algorithm based on locally-tuned receptive fields maps naturally onto cortical hardware, and gives coherence to a variety of features of cortical anatomy, physiology, and biophysics whose relations to learning remain poorly understood.
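A network of locally-tuned receptive fields of the kind referenced is, in modern terms, a radial basis function network: the output is a weighted sum of Gaussian units, each responding only near its own center. A minimal sketch (centers, widths, and weights are illustrative):

```python
import numpy as np

def rbf_predict(x, centers, widths, weights):
    """RBF network output: weighted sum of locally-tuned Gaussian units."""
    acts = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * widths ** 2))
    return acts @ weights

centers = np.array([[0.0], [1.0]])   # one "receptive field" per unit
widths = np.array([0.3, 0.3])
weights = np.array([2.0, -1.0])
y = rbf_predict(np.array([0.0]), centers, widths, weights)
```

Because each unit is active only near its center, learning the output weights adjusts the map locally, which is the property the paper ties to cortical hardware.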
Computer Modeling of Associative Learning
Alkon, Daniel L., Quek, Francis K. H., Vogl, Thomas P.
This paper describes an ongoing effort which approaches neural net research in a program of close collaboration of neuroscientists and engineers. The effort is designed to elucidate associative learning in the marine snail Hermissenda crassicornis, in which Pavlovian conditioning has been observed. Learning has been isolated in the four neuron network at the convergence of the visual and vestibular pathways in this animal, and biophysical changes, specific to learning, have been observed in the membrane of the photoreceptor B cell. A basic charging capacitance model of a neuron is used and enhanced with biologically plausible mechanisms that are necessary to replicate the effect of learning at the cellular level. These mechanisms are nonlinear and are, primarily, instances of second order control systems (e.g.
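The "basic charging capacitance model" is, in essence, a leaky RC integrator: C dV/dt = -V/R + I(t). A minimal forward-Euler sketch with illustrative parameters (not the paper's fitted values):

```python
import numpy as np

def simulate(I, C=1.0, R=10.0, dt=0.1, v0=0.0):
    """Integrate the charging-capacitance neuron C dV/dt = -V/R + I(t)
    with forward Euler; returns the membrane-potential trace."""
    v = v0
    trace = []
    for i in I:
        v += dt * (-v / R + i) / C
        trace.append(v)
    return np.array(trace)

# Constant input current of 0.5: the membrane charges toward V = I * R = 5.0
trace = simulate(np.ones(200) * 0.5)
```

The biologically plausible mechanisms the abstract mentions would be layered on top of this base dynamics, e.g. as conductances that change with the learning state.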